Impression learning: Online representation learning with synaptic plasticity

Neural Information Processing Systems

Understanding how the brain constructs statistical models of the sensory world remains a longstanding challenge for computational neuroscience. Here, we derive an unsupervised local synaptic plasticity rule that trains neural circuits to infer latent structure from sensory stimuli via a novel loss function for approximate online Bayesian inference. The learning algorithm is driven by a local error signal computed between two factors that jointly contribute to neural activity: stimulus drive and internal predictions, the network's 'impression' of the stimulus. We show that learning can be implemented online, is capable of capturing temporal dependencies in continuous input streams, and generalizes to hierarchical architectures. Furthermore, we demonstrate both analytically and empirically that the algorithm is more data-efficient than a three-factor plasticity alternative, enabling it to learn statistics of high-dimensional, naturalistic inputs.
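The local error signal described above can be illustrated with a deliberately simplified sketch. This is not the paper's actual plasticity rule; it is a generic linear circuit, with hypothetical dimensions and variable names, in which prediction weights are trained by the discrepancy between stimulus drive and the network's internal prediction, and each weight update uses only locally available pre- and post-synaptic quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions; not taken from the paper.
n_in, n_lat = 8, 4
W = 0.1 * rng.standard_normal((n_lat, n_in))  # fixed stimulus-drive weights
V = 0.1 * rng.standard_normal((n_in, n_lat))  # learned prediction weights
eta = 0.05                                    # learning rate

x = rng.standard_normal(n_in)  # one fixed stand-in stimulus
errors = []
for _ in range(500):
    z = W @ x                  # latent activity driven by the stimulus
    x_hat = V @ z              # the circuit's internal prediction ('impression')
    e = x - x_hat              # local error: stimulus drive minus prediction
    V += eta * np.outer(e, z)  # each synapse sees only its own pre/post terms
    errors.append(np.linalg.norm(e))

print(errors[0], errors[-1])   # reconstruction error shrinks over training
```

Because the update to `V` contracts the error along the current latent direction, the error norm decreases monotonically for this fixed stimulus; the paper's rule additionally handles streaming stimuli, temporal dependencies, and hierarchy.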



Impression learning: Online representation learning with synaptic plasticity (Appendices)

Neural Information Processing Systems

Our derivation of the update for IL (Eq. 3) is based on an expansion of the log term. We examine the consequences of this bias formula for our specific model. Note the form of the update term in Eq. (S1); however, we show in Appendix C that these updates may have high variance. This motivates the 'reparameterization trick,' in which a change of variables allows the use of stochastic gradient descent. It is worth noting that this reparameterization works only for additive Gaussian noise. As already mentioned, WS can be viewed as a special case of IL, and since WS is a special case of IL, the bias properties of its individual samples are identical.
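The change of variables mentioned above can be shown in a minimal sketch. The objective and parameter names here are hypothetical, not the paper's model: a Gaussian sample z ~ N(mu, sigma^2) is rewritten as z = mu + sigma * eps with eps ~ N(0, 1), so the stochasticity is isolated in eps and gradients of a loss on z can flow deterministically into mu and sigma. This is exactly the additive-Gaussian case where the trick applies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Parameters of the sampling distribution (hypothetical starting values).
mu, log_sigma = 0.5, np.log(1.0)
eta = 0.05  # learning rate

def sample(mu, log_sigma, eps):
    # Reparameterization: z = mu + sigma * eps, with eps ~ N(0, 1).
    return mu + np.exp(log_sigma) * eps

# Toy objective: E[(z - 2)^2], minimized by mu -> 2 and sigma -> 0.
for _ in range(2000):
    eps = rng.standard_normal()
    z = sample(mu, log_sigma, eps)
    dz = 2.0 * (z - 2.0)                # d loss / d z
    mu -= eta * dz                      # d z / d mu = 1
    log_sigma -= eta * dz * np.exp(log_sigma) * eps  # d z / d log_sigma

print(round(mu, 2), round(np.exp(log_sigma), 3))
```

Without the change of variables, the gradient would have to pass through the sampling operation itself, which is what forces higher-variance estimators in the non-reparameterizable cases discussed in the appendix.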

